Meet the Chinese Startup Using AI--and a Small Army of Workers--to Train Robots

WIRED

AgiBot is using AI-powered robots to do new manufacturing tasks. Smarter machines may transform physical labor in China. AgiBot, a humanoid robotics company based in Shanghai, has engineered a way for two-armed robots to learn manufacturing tasks through human training and real-world practice on a factory production line. The company says its system, which combines teleoperation and reinforcement learning, is being tested on a production line belonging to Longcheer Technology, a Chinese company that manufactures smartphones, VR headsets, and other electronic gadgets. AgiBot's project shows how more advanced AI is starting to change the abilities of industrial machines--an innovation that may creep into new areas of manufacturing in China and elsewhere.


Robot see, robot do: System learns after watching how-tos

Robohub

Kushal Kedia (left) and Prithwish Dan (right) are members of the development team behind RHyME, a system that allows robots to learn tasks by watching a single how-to video. Cornell researchers have developed a new robotic framework powered by artificial intelligence – called RHyME (Retrieval for Hybrid Imitation under Mismatched Execution) – that allows robots to learn tasks by watching a single how-to video. RHyME could fast-track the development and deployment of robotic systems by significantly reducing the time, energy and money needed to train them, the researchers said. "One of the annoying things about working with robots is collecting so much data on the robot doing different tasks," said Kushal Kedia, a doctoral student in the field of computer science and lead author of a corresponding paper on RHyME. "That's not how humans do tasks. We look at other people as inspiration."
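The core idea the researchers describe — matching clips of a human how-to video against a bank of previously collected robot experience — can be sketched in a few lines. This is a hypothetical illustration only: the embeddings below are random stand-ins, whereas the real RHyME system learns a shared embedding space so that human and robot clips can be compared despite mismatched execution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings (assumption: 16-dim features from a learned encoder)
robot_bank = rng.normal(size=(50, 16))   # 50 previously collected robot snippets
human_video = rng.normal(size=(5, 16))   # 5 clips from a single how-to video

def retrieve_surrogates(human_clips, bank):
    """For each human clip, return the index of the most similar robot snippet."""
    # Cosine similarity between every human clip and every bank snippet
    h = human_clips / np.linalg.norm(human_clips, axis=1, keepdims=True)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    sims = h @ b.T                       # (5, 50) similarity matrix
    return sims.argmax(axis=1)           # best-matching snippet per clip

# The retrieved snippets would then stand in as demonstrations for training
matches = retrieve_surrogates(human_video, robot_bank)
```

Retrieval is what lets a single human video suffice: rather than collecting new robot data for each task, the system reuses snippets it already has.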


Nvidia unveils new products from supercharged graphics chip to AI that trains robots

Christian Science Monitor | Science

In a packed Las Vegas arena, Nvidia founder Jensen Huang stood on stage and marveled over the crisp real-time computer graphics displayed on the screen behind him. He watched as a dark-haired woman walked through ornate gilded double doors and took in the rays of light that poured in through stained glass windows. "The amount of geometry that you saw was absolutely insane," Mr. Huang told an audience of thousands at CES 2025 the night of Jan. 6. "It would have been impossible without artificial intelligence." The chipmaker and AI darling unveiled its GeForce RTX 50 Series desktop and laptop GPUs – its most advanced consumer graphics processor units for gamers, creators, and developers.


A way to let robots learn by listening will make them more useful

MIT Technology Review

Robots have typically been trained on visual data alone, with sound left out; researchers at the Robotics and Embodied AI Lab at Stanford University set out to change that. They first built a system for collecting audio data, consisting of a GoPro camera and a gripper with a microphone designed to filter out background noise. Human demonstrators performed a variety of household tasks with the gripper, and the team then used this data to teach robotic arms how to execute the tasks on their own. The team's new training algorithms help robots gather clues from audio signals to perform more effectively. "Thus far, robots have been training on videos that are muted," says Zeyi Liu, a PhD student at Stanford and lead author of the study.
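The fusion step the article alludes to — feeding a policy both sight and sound — can be sketched minimally. The encoders below are random projections standing in for learned feature extractors; only the concatenation of modalities reflects the described approach.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in encoders (assumption: raw features are 64-dim visual, 128-dim audio)
W_img = rng.normal(size=(64, 32))    # hypothetical visual projection
W_aud = rng.normal(size=(128, 32))   # hypothetical audio projection

def fuse(image_feat, audio_feat):
    """Project each modality and concatenate into one policy observation."""
    v = image_feat @ W_img
    a = audio_feat @ W_aud
    return np.concatenate([v, a])    # joint observation the policy consumes

obs = fuse(rng.normal(size=64), rng.normal(size=128))
```

A policy trained on such fused observations can exploit cues (a latch clicking, velcro tearing) that muted video simply cannot carry.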


The robot race is fueling a fight for training data

MIT Technology Review

Roboticists believe that by using new AI techniques, they will achieve something the field has pined after for decades: more capable robots that can move freely through unfamiliar environments and tackle challenges they've never seen before. "It's like being strapped to the front of a rocket," says Russ Tedrake, vice president of robotics research at the Toyota Research Institute, of the field's pace right now. Tedrake says he has seen plenty of hype cycles rise and fall, but none like this one. "I've been in the field for 20-some years. This is different," he says.


Amazon wants you to help train robots by playing a video game

New Scientist

Amazon has created a video game called Alexa Arena in which you interact with virtual robots. It is designed to gather data to train robots on how to behave around humans, but there are doubts over how many people will actually play the game. Amazon has industrial robots for its vast warehouses but also develops consumer devices such as the Astro home robot, which launched in 2021.


Why household robot servants are a lot harder to build than robotic vacuums and automated warehouse workers

Robohub

Who wouldn't want a robot to handle all the household drudgery? With recent advances in artificial intelligence and robotics technology, there is growing interest in developing and marketing household robots capable of handling a variety of domestic chores. Tesla is building a humanoid robot, which, according to CEO Elon Musk, could be used for cooking meals and helping elderly people. Amazon recently acquired iRobot, a prominent robotic vacuum manufacturer, and has been investing heavily in the technology through the Amazon Robotics program to expand robotics technology to the consumer market. In May 2022, Dyson, a company renowned for its vacuum cleaners, announced that it plans to build the U.K.'s largest robotics center devoted to developing household robots that carry out daily domestic tasks in residential spaces.


DayDreamer: An algorithm to quickly teach robots new behaviors in the real world

#artificialintelligence

Training robots to complete tasks in the real world can be a very time-consuming process, which involves building a fast and efficient simulator, performing numerous trials on it, and then transferring the behaviors learned during these trials to the real world. In many cases, however, the performance achieved in simulation does not match that attained in the real world, due to unpredictable changes in the environment or task. Researchers at the University of California, Berkeley (UC Berkeley) have recently developed DayDreamer, a tool that could be used to train robots to complete real-world tasks more effectively. Their approach, introduced in a paper pre-published on arXiv, is based on learning models of the world that allow robots to predict the outcomes of their movements and actions, reducing the need for extensive trial-and-error training in the real world. "We wanted to build robots that continuously learn directly in the real world, without having to create a simulation environment," Danijar Hafner, one of the researchers who carried out the study, told TechXplore.
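The world-model idea — fit a model of the dynamics from real transitions, then "imagine" rollouts inside it to evaluate actions without further real-world trials — can be illustrated on a toy problem. The linear 1-D environment below is purely an assumption for illustration; the actual DayDreamer system learns a latent world model with neural networks.

```python
import numpy as np

rng = np.random.default_rng(2)

A_true, B_true = 0.9, 0.5  # hidden dynamics the robot doesn't know: s' = A*s + B*a

# Collect a batch of real transitions (state, action, next state)
s = rng.normal(size=200)
a = rng.normal(size=200)
s_next = A_true * s + B_true * a

# Fit the world model s' ~ A*s + B*a by least squares on the logged data
X = np.stack([s, a], axis=1)
(A_hat, B_hat), *_ = np.linalg.lstsq(X, s_next, rcond=None)

def imagine_return(s0, actions):
    """Roll the learned model forward and score how close we stay to the goal s=0."""
    state, cost = s0, 0.0
    for act in actions:
        state = A_hat * state + B_hat * act  # predicted, not real, transition
        cost += state ** 2
    return -cost

# Compare two candidate action sequences entirely in imagination
plans = [np.zeros(5), -np.full(5, 0.5)]
best = max(plans, key=lambda p: imagine_return(2.0, p))
```

Because the comparison happens inside the learned model, the robot spends real-world time only on collecting transitions, not on trial-and-error search.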


A new framework that could simplify imitation learning in robotics

#artificialintelligence

Over the past few decades, computer scientists have been trying to train robots to tackle a variety of tasks, including house chores and manufacturing processes. One of the most renowned strategies used to train robots on manual tasks is imitation learning. As suggested by its name, imitation learning entails teaching a robot how to do something using human demonstrations. While in some studies this training strategy achieved very promising results, it often requires large and annotated datasets containing hundreds of videos where humans complete a given task. Researchers at New York University have recently developed VINN, an alternative imitation learning framework that does not necessarily require large training datasets.
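The nearest-neighbor flavor of imitation learning that frameworks like VINN explore can be sketched compactly: store (observation embedding, action) pairs from a handful of demonstrations, then at run time copy the action of the closest stored frames. The random embeddings below are stand-ins for features from a pretrained visual encoder; the numbers and dimensions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# A small demonstration buffer (assumption: 8-dim embeddings, 2-dim actions)
demo_embeds = rng.normal(size=(100, 8))   # embedding of each demo frame
demo_actions = rng.normal(size=(100, 2))  # action the human took at that frame

def nn_policy(obs_embed, k=3):
    """Average the actions of the k nearest demonstration frames."""
    dists = np.linalg.norm(demo_embeds - obs_embed, axis=1)
    nearest = np.argsort(dists)[:k]       # indices of the k closest frames
    return demo_actions[nearest].mean(axis=0)

action = nn_policy(rng.normal(size=8))
```

The appeal is that no policy network is trained at all: a good embedding plus a lookup can go surprisingly far with only a few demonstrations, which is why such methods sidestep the need for large annotated datasets.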


These robots can move your couch: Researchers develop robots that can work independently but cooperatively

#artificialintelligence

If you've ever helped someone move furniture, you know it takes coordination -- simultaneously pushing or pulling and reacting based on what your helper is doing. That makes it an ideal problem for examining collaboration between robots, said Andrew Barth, a doctoral student in UC's College of Engineering and Applied Science. "It's a good metaphor for cooperation," Barth said. In the Intelligent Robotics and Autonomous Systems Lab of UC aerospace engineering professor Ou Ma, student researchers developed artificial intelligence to train robots to work together to move a couch -- or in this case a long rod that served as a stand-in -- around two obstacles and through a narrow door in computer simulations. "We made it a little more difficult on ourselves. We want to accomplish the task with as little communication as possible among the robots," Barth said.